242 research outputs found

    Evaluation of Kermeta for Solving Graph-based Problems

    Kermeta is a meta-language for specifying the structure and behavior of graphs of interconnected objects, called models. In this paper, we show that Kermeta is relatively well suited to solving three graph-based problems. First, Kermeta allows the specification of generic model transformations, such as refactorings, which we apply to different metamodels including Ecore, Java, and UML. Second, we demonstrate the extensibility of Kermeta to the formal language Alloy using an inter-language model transformation; Kermeta uses Alloy to generate recommendations for completing partially specified models. Third, we show that the Kermeta compiler achieves better execution-time and memory performance than similar graph-based approaches on a common case study. The three solutions proposed for these graph-based problems, and their evaluation with Kermeta against the criteria of genericity, extensibility, and performance, are the main contribution of the paper. Another contribution is the comparison of these solutions with those proposed by other graph-based tools.
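
    To make the genericity claim concrete, here is a minimal, hypothetical Python sketch (not Kermeta code) of a refactoring written once against a uniform object-graph interface, so the same operation applies to models conforming to different metamodels; the ModelElement class and the rename operation are illustrative assumptions.

        # A refactoring written once against a uniform object-graph interface.
        class ModelElement:
            """Hypothetical stand-in for an element of any metamodel."""
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []

        def rename(element, old_name, new_name):
            """Generic refactoring: rename every matching element in the graph."""
            if element.name == old_name:
                element.name = new_name
            for child in element.children:
                rename(child, old_name, new_name)

        # The same operation applies to any ModelElement tree, mirroring how a
        # generic transformation is written once and reused across metamodels.
        root = ModelElement("Package", [ModelElement("OldClass")])
        rename(root, "OldClass", "NewClass")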

    Industry–Academia Research Collaboration and Knowledge Co-creation: Patterns and Anti-patterns

    Increasing the impact of software engineering research in the software industry and society at large has long been a high-priority concern for the software engineering community. The problem of two cultures, research conducted in a vacuum (disconnected from the real world), and misaligned time horizons are just some of the many complex challenges standing in the way of successful industry–academia collaboration. This article reports on the experience of research collaboration and knowledge co-creation between industry and academia in software engineering as a way to bridge the research–practice gap. Our experience spans 14 years of collaboration between researchers in software engineering and the European and Norwegian software and IT industry. Using participant observation and interviews, we collected and subsequently analyzed an extensive record of qualitative data. Drawing upon the findings made and the experience gained, we provide a set of 14 patterns and 14 anti-patterns for industry–academia collaboration, aimed at supporting other researchers and practitioners in establishing and running research collaboration projects in software engineering.

    Deep learning to predict power output from respiratory inductive plethysmography data

    Power output is one of the most accurate methods for measuring exercise intensity during outdoor endurance sports, since it records the actual effect of the work performed by the muscles over time. However, power meters are expensive and are limited to activity forms where sensors can be embedded in the propulsion system, such as cycling. We investigate using breathing to estimate power output during exercise, in order to create a portable method for tracking physical effort that is applicable across many activity forms. Breathing can be quantified through respiratory inductive plethysmography (RIP), which entails recording the movement of the rib cage and abdomen caused by breathing, giving us a portable, non-invasive device for measuring breathing. RIP signals, heart rate, and power output were recorded during an N-of-1 study of a person performing a set of workouts on a stationary bike. The recorded data were used to build predictive models through deep learning algorithms. A convolutional neural network (CNN) trained on features derived from the RIP signals and heart rate obtained a mean absolute percentage error (MAPE) of 0.20 (i.e., a 20% average error). The model showed promising capability in estimating correct power levels and reacting to changes in power output, but its accuracy is significantly lower than that of cycling power meters.
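
    As a rough illustration of the modeling setup (not the authors' exact architecture), the following Python/Keras sketch maps fixed-length windows of breathing and heart-rate channels to a power value and tracks MAPE, the paper's error metric; the window length, channel count, and layer sizes are assumptions.

        # A 1D CNN regressor from sensor windows to power output (sketch).
        import numpy as np
        import tensorflow as tf

        WINDOW, CHANNELS = 256, 3  # assumed: rib cage, abdomen, heart rate

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
            tf.keras.layers.Conv1D(32, 5, activation="relu"),
            tf.keras.layers.MaxPooling1D(2),
            tf.keras.layers.Conv1D(64, 5, activation="relu"),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(1),  # predicted power output (watts)
        ])
        model.compile(optimizer="adam", loss="mae",
                      metrics=["mape"])  # mean absolute percentage error

        # Dummy arrays only to show the expected shapes.
        x = np.random.rand(8, WINDOW, CHANNELS).astype("float32")
        y = (np.random.rand(8, 1) * 300).astype("float32")
        model.fit(x, y, epochs=1, verbose=0)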

    Portinari: A Data Exploration Tool to Personalize Cervical Cancer Screening

    Socio-technical systems play an important role in public health screening programs to prevent cancer. Cervical cancer incidence has decreased significantly in countries that developed systems for organized screening engaging medical practitioners, laboratories, and patients. Such a system automatically identifies individuals at risk of developing the disease and invites them to a screening exam or a follow-up exam conducted by medical professionals. A triage algorithm in the system aims to reduce unnecessary screening exams for individuals at low risk while detecting and treating individuals at high risk. Despite the general success of screening, the triage algorithm is a one-size-fits-all approach that is not personalized to the patient. This can easily be observed in historical data from screening exams. Patients often rely on personal factors to determine that they are either at high risk or not at risk at all, and take action at their own discretion. Can exploring patient trajectories help hypothesize personal factors leading to their decisions? We present Portinari, a data exploration tool to query and visualize future trajectories of patients who have undergone a specific sequence of screening exams. The web-based tool contains (a) a visual query interface, (b) a backend graph database of events in patients' lives, and (c) trajectory visualization using Sankey diagrams. We use Portinari to explore the diverse trajectories of patients following the Norwegian triage algorithm. The trajectories demonstrated variable degrees of adherence to the triage algorithm and allowed epidemiologists to hypothesize about the possible causes. (Conference paper published at ICSE 2017, Buenos Aires, in the Software Engineering in Society track; 10 pages, 5 figures.)
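
    As an illustration of the trajectory-visualization idea only (not Portinari's actual graph-database backend or query interface), the following Python sketch aggregates exam-to-exam transitions from toy trajectories and renders them as a Sankey diagram with plotly; all event names are invented.

        # Count exam-to-exam transitions per step and draw a Sankey diagram.
        from collections import Counter
        import plotly.graph_objects as go

        trajectories = [  # invented screening-result sequences
            ["Normal", "Normal", "Low-grade"],
            ["Normal", "Low-grade", "Follow-up"],
            ["Normal", "Normal", "Normal"],
        ]

        nodes = sorted({(i, e) for t in trajectories for i, e in enumerate(t)})
        index = {n: k for k, n in enumerate(nodes)}
        links = Counter((index[(i, a)], index[(i + 1, b)])
                        for t in trajectories
                        for i, (a, b) in enumerate(zip(t, t[1:])))

        fig = go.Figure(go.Sankey(
            node=dict(label=[f"exam {i + 1}: {e}" for i, e in nodes]),
            link=dict(source=[s for s, _ in links],
                      target=[t for _, t in links],
                      value=list(links.values()))))
        fig.show()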

    Discovering Model Transformation Pre-conditions using Automatically Generated Test Models

    Specifying a model transformation is challenging, as it must produce a meaningful output for any input model in a possibly infinite modeling domain. Transformation preconditions constrain the input domain by rejecting input models that are not meant to be transformed. This paper presents a systematic approach to discovering such preconditions when it is hard for a human developer to foresee the complex graphs of objects that are not meant to be transformed. The approach systematically generates a finite number of test models with our tool PRAMANA, first covering the input domain based on input-domain partitioning. Tracing a transformation's execution then reveals why some preconditions are missing. Using a benchmark transformation from simplified UML class diagram models to RDBMS models, we discover new preconditions that were not initially specified.
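
    The core idea can be sketched in Python (a conceptual toy, not the PRAMANA tool): run the transformation on generated test models that cover boundary cases of the input domain, and let each failure suggest a missing precondition; the transformation and models below are invented.

        # Run the transformation on generated models covering boundary cases;
        # each failure pattern suggests a missing precondition.
        def class_to_table(model):
            """Toy UML-class-to-RDBMS transformation (hypothetical)."""
            primary_key = model["attributes"][0]  # breaks on attribute-less classes
            return {"table": model["name"], "pk": primary_key}

        generated_models = [  # e.g. produced by input-domain partitioning
            {"name": "Person", "attributes": ["id", "name"]},
            {"name": "Marker", "attributes": []},  # boundary case
        ]

        discovered = []
        for m in generated_models:
            try:
                class_to_table(m)
            except IndexError:
                discovered.append(f"{m['name']}: class must have >= 1 attribute")

        print(discovered)  # a candidate precondition, traced from the failure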

    Using Models of Partial Knowledge to Test Model Transformations

    Testers often use partial knowledge to build test models. This knowledge comes from sources such as requirements, known faults, existing inputs, and execution traces. In Model-Driven Engineering, test inputs are models executed by model transformations. Modelers build them using partial knowledge while meticulously satisfying the many well-formedness rules imposed by the modeling language. This manual process is tedious, and language constraints can force users to create complex models even to represent simple knowledge. In this paper, we simplify the development of test models by presenting an integrated methodology and semi-automated tool that allow users to build only small partial test models directly representing their testing intent. We argue that partial models are more readable and maintainable, and that they can be automatically completed to full input models while respecting language constraints. We validate this approach by evaluating the size and fault-detection effectiveness of partial models compared to traditionally built test models, showing that they detect the same bugs/faults with greatly reduced development effort.
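
    A naive Python sketch of the completion idea follows; the actual approach resolves the modeling language's constraints rather than filling in defaults, and the fields and well-formedness rule below are invented for illustration.

        # The tester writes only the facts they care about; mandatory fields
        # are auto-filled so the result satisfies well-formedness rules.
        REQUIRED_DEFAULTS = {"visibility": "public", "abstract": False,
                             "attributes": []}  # invented rules and defaults

        def complete(partial_model):
            """Fill every missing mandatory field of a partial test model."""
            full = {**REQUIRED_DEFAULTS, **partial_model}
            assert full["visibility"] in {"public", "private"}  # well-formedness
            return full

        # The testing intent is just "a class named Order"; the rest is derived.
        print(complete({"name": "Order"}))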

    UDAVA: an unsupervised learning pipeline for sensor data validation in manufacturing

    Manufacturing has enabled the mechanized mass production of the same (or similar) products by replacing craftsmen with assembly lines of machines. The quality of each product in an assembly line hinges on continual observation and error compensation during machining, using sensors that measure quantities such as the position and torque of a cutting tool and vibrations due to possible imperfections in the cutting tool and raw material. Patterns observed in sensor data from a (near-)optimal production cycle should ideally recur in subsequent production cycles with minimal deviation. Manually labeling and comparing such patterns is an insurmountable task due to the massive amount of streaming data a production process can generate. We present UDAVA, an unsupervised machine learning pipeline that automatically discovers process behavior patterns in sensor data from a reference production cycle. UDAVA clusters reduced-dimensionality summary statistics of raw sensor data, enabling high-speed clustering of dense time-series data. It deploys the model as a service that verifies batch data from subsequent production cycles, detecting recurring behavior patterns and quantifying deviation from the reference behavior. We have evaluated UDAVA from an AI Engineering perspective using two industrial case studies.
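
    A minimal Python sketch of the pipeline idea, assuming an invented window size, feature set, and cluster count: compress raw sensor streams into per-window summary statistics, then cluster the windows so that each cluster id labels a recurring process behavior.

        # Per-window summary statistics make dense time series cheap to cluster.
        import numpy as np
        from sklearn.cluster import KMeans

        def window_features(signal, size=128):  # window size is an assumption
            windows = signal[: len(signal) // size * size].reshape(-1, size)
            return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                                    windows.min(axis=1), windows.max(axis=1)])

        # Reference production cycle (synthetic stand-in for real sensor data).
        reference = np.sin(np.linspace(0, 60, 6000)) + 0.05 * np.random.randn(6000)
        model = KMeans(n_clusters=4, n_init=10).fit(window_features(reference))

        # Subsequent batches are validated by comparing their label sequence
        # against the reference cycle's labels to quantify deviation.
        batch = np.sin(np.linspace(0, 60, 6000)) + 0.05 * np.random.randn(6000)
        labels = model.predict(window_features(batch))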

    Uncertainty-aware Virtual Sensors for Cyber-Physical Systems

    Virtual sensors in Cyber-Physical Systems (CPS) are AI replicas of physical sensors that mimic their behavior by processing input data from other sensors monitoring the same system. However, we cannot always trust these replicas, owing to uncertainty arising from changes in environmental conditions, measurement errors, model structure errors, and unknown input data. An awareness of numerical uncertainty in these models can help ignore some predictions and communicate limitations for responsible action. We present a data pipeline to train and deploy uncertainty-aware virtual sensors in CPS. Our virtual sensor, based on a Bayesian Neural Network (BNN), predicts the expected values of a physical sensor along with a standard deviation indicating the degree of uncertainty in its predictions. We discuss how this uncertainty awareness bolsters trustworthy AI, using a vibration-sensing virtual sensor in automotive manufacturing. Acknowledgement: This work was conducted as part of the InterQ project (958357) and the DAT4.ZERO project (958363), funded by the European Commission within the Horizon 2020 research and innovation programme.
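
    The paper uses a Bayesian Neural Network; as a lightweight stand-in, the following Python sketch uses Monte Carlo dropout, which likewise yields a mean prediction and a per-input standard deviation. The feature count is an assumption, and training on historical sensor data is omitted.

        # Monte Carlo dropout: keep dropout active at inference and sample.
        import numpy as np
        import tensorflow as tf

        inputs = tf.keras.Input(shape=(16,))  # 16 companion-sensor features (assumed)
        h = tf.keras.layers.Dense(64, activation="relu")(inputs)
        h = tf.keras.layers.Dropout(0.2)(h, training=True)  # stays on at inference
        outputs = tf.keras.layers.Dense(1)(h)  # virtual vibration-sensor value
        model = tf.keras.Model(inputs, outputs)
        # (Training on historical sensor data omitted for brevity.)

        x = np.random.rand(4, 16).astype("float32")
        samples = np.stack([model(x).numpy() for _ in range(100)])
        mean, std = samples.mean(axis=0), samples.std(axis=0)
        # A large std flags a prediction that should be ignored or reviewed.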

    Crowdpinion: Motivating People to Share Their Momentary Opinion

    Many interesting social studies can be conducted by asking people what they think, feel, or experience at the moment certain events occur in their daily lives. Such studies can be based on the event-contingent protocol of the Experience Sampling Method. We have implemented this protocol in Crowdpinion, our software tool consisting of a web panel where researchers can set up and control their studies and a mobile app for respondents. To extend users' motivation beyond the will to contribute to research, we applied gamification elements based on fostering curiosity by gradually revealing the big picture of the overall study. In this paper we describe the concepts of Crowdpinion as a research tool, our approach to gamification, how we tested it in the beta version, and our plans for future improvements.
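
    A toy Python sketch of the event-contingent sampling idea (not Crowdpinion's actual API): a momentary question is pushed only when the respondent reports a study-relevant event; the event names and questions below are invented.

        # Prompt only on study-relevant events, per the event-contingent protocol.
        STUDY_QUESTIONS = {
            "meeting_ended": "How useful was the meeting you just attended?",
            "commute_done": "How stressful was your commute?",
        }

        def on_event(event_type):
            """Called when the mobile app logs an event reported by the user."""
            question = STUDY_QUESTIONS.get(event_type)
            if question is None:
                return None  # event is not part of the study; no prompt
            return {"prompt": question, "scale": list(range(1, 6))}

        print(on_event("meeting_ended"))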